10 research outputs found

    Automated colonoscopy withdrawal phase duration estimation using cecum detection and surgical tasks classification

    Colorectal cancer is the third most common type of cancer, with almost two million new cases worldwide. It develops from neoplastic polyps, most commonly adenomas, which can be removed during colonoscopy to prevent colorectal cancer from occurring. Unfortunately, up to a quarter of polyps are missed during colonoscopies. Studies have shown that polyp detection during a procedure correlates with the time spent searching for polyps, called the withdrawal time. The different phases of the procedure (cleaning, therapeutic, and exploration phases) make it difficult to precisely measure the withdrawal time, which should only include the exploration phase. Separating this phase from the others requires manual time measurement during the procedure, which is rarely performed. In this study, we propose a method to automatically detect the cecum, which marks the start of the withdrawal phase, and to classify the different phases of the colonoscopy, allowing precise estimation of the final withdrawal time. This is achieved using a ResNet for both detection and classification, trained with two public datasets and a private dataset composed of 96 full procedures. Out of 19 testing procedures, 18 have their withdrawal time correctly estimated, with a mean error of 5.52 seconds per minute per procedure.
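
    A minimal sketch of the final step, in Python, assuming per-frame phase labels and a cecum frame index are already available from the classifier (the frame rate and helper names below are assumptions, not the paper's actual pipeline):

        # Minimal sketch: derive withdrawal time from per-frame phase labels.
        # Assumes a hypothetical list of phase predictions (one per video frame)
        # and a known cecum frame index; not the authors' actual pipeline.

        FPS = 25  # assumed video frame rate

        def withdrawal_time_seconds(phase_per_frame, cecum_frame, fps=FPS):
            """Count only exploration frames after the cecum is reached."""
            exploration_frames = sum(
                1 for label in phase_per_frame[cecum_frame:]
                if label == "exploration"  # cleaning/therapeutic frames excluded
            )
            return exploration_frames / fps

        # Example: 3 minutes of video at 25 fps, cecum reached at frame 1500.
        labels = ["exploration"] * 4500
        labels[2000:2500] = ["cleaning"] * 500  # a cleaning episode is not counted
        print(withdrawal_time_seconds(labels, cecum_frame=1500))  # -> 100.0 seconds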

    Polyp detection on video colonoscopy using a hybrid 2D/3D CNN

    Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst in clinical practice the treatment is performed on a real-time video feed. Non-curated video data remains a challenge, as it contains low-quality frames when compared to still, selected images often obtained from diagnostic records. Nevertheless, it also embeds temporal information that can be exploited to increase prediction stability. A hybrid 2D/3D convolutional neural network architecture for polyp segmentation is presented in this paper. The network is used to improve polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detections. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients and on the publicly available SUN polyp database. A higher performance and increased generalisability indicate that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm and the inclusion of temporal information.
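
    One way such a hybrid could be wired up, as a hedged PyTorch sketch: a shared 2D encoder extracts per-frame features and a light 3D head correlates them over a short clip. Layer sizes, clip length, and the segmentation head here are illustrative assumptions, not the paper's exact architecture.

        # Hybrid 2D/3D sketch: per-frame 2D features + 3D temporal aggregation.
        import torch
        import torch.nn as nn

        class Hybrid2D3D(nn.Module):
            def __init__(self, clip_len=5):
                super().__init__()
                self.encoder2d = nn.Sequential(          # per-frame spatial features
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.head3d = nn.Sequential(             # temporal aggregation
                    nn.Conv3d(64, 64, kernel_size=(clip_len, 3, 3), padding=(0, 1, 1)),
                    nn.ReLU(),
                    nn.Conv3d(64, 1, kernel_size=1),     # per-pixel polyp logit
                )

            def forward(self, clip):                     # clip: (B, T, 3, H, W)
                b, t, c, h, w = clip.shape
                feats = self.encoder2d(clip.reshape(b * t, c, h, w))
                feats = feats.reshape(b, t, 64, h // 4, w // 4).permute(0, 2, 1, 3, 4)
                return self.head3d(feats).squeeze(2)     # (B, 1, H/4, W/4) mask logits

        print(Hybrid2D3D()(torch.randn(1, 5, 3, 64, 64)).shape)  # torch.Size([1, 1, 16, 16])

    Running the 2D encoder once per frame and keeping the 3D head shallow is what makes such designs compatible with real-time detection.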

    Spatio-temporal classification for polyp diagnosis

    Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance during a procedure can vary, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness in extensive experiments on both internal and openly available benchmark datasets.
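
    The two methods are not detailed in the abstract; a common baseline for exploiting temporal information is simply to pool per-frame probabilities over a clip before thresholding, as in this hedged sketch (the threshold and scores are illustrative, not the paper's):

        # Clip-level diagnosis by pooling per-frame predictions, so a few
        # unstable frames cannot flip the final adenoma/non-adenoma call.
        import numpy as np

        def clip_level_diagnosis(frame_probs, threshold=0.5):
            """frame_probs: per-frame P(adenoma) from any frame-level classifier."""
            return "adenoma" if np.mean(frame_probs) >= threshold else "non-adenoma"

        print(clip_level_diagnosis([0.9, 0.8, 0.2, 0.85, 0.7]))  # -> adenoma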

    Identifying key mechanisms leading to visual recognition errors for missed colorectal polyps using eye-tracking technology

    BACKGROUND AND AIMS: Lack of visual recognition of colorectal polyps may lead to interval cancers. The mechanisms contributing to perceptual variation, particularly for subtle and advanced colorectal neoplasia, have scarcely been investigated. We aimed to evaluate visual recognition errors and provide novel mechanistic insights. METHODS: Eleven participants (7 trainees, 4 medical students) evaluated images from the UCL polyp perception dataset, containing 25 polyps, using eye-tracking equipment. Gaze errors were defined as those where the lesion was not observed according to eye-tracking technology. Cognitive errors occurred when lesions were observed but not recognised as polyps by participants. A video study including 39 subtle polyps was also performed, in which polyp recognition performance was compared with a convolutional neural network (CNN). RESULTS: Cognitive errors occurred more frequently than gaze errors overall (65.6%), with a significantly higher proportion in trainees (P=0.0264). In the video validation, the CNN detected significantly more polyps than trainees and medical students, with per-polyp sensitivities of 79.5%, 30.0%, and 15.4%, respectively. CONCLUSIONS: Cognitive errors were the most common reason for visual recognition errors. The impact of interventions such as artificial intelligence, particularly on different types of perceptual errors, needs further investigation, including potential effects on learning curves. To facilitate future research, a publicly accessible visual perception colonoscopy polyp database was created.

    Computer aided characterization of early cancer in Barrett's esophagus on i-scan magnification imaging - Multicenter international study

    BACKGROUND AND AIMS: We aimed to develop a computer-aided characterization system that can support the diagnosis of dysplasia in Barrett's esophagus (BE) on magnification endoscopy. METHODS: Videos were collected in high-definition magnification white light and virtual chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic/non-dysplastic BE (NDBE) from 4 centres. We trained a neural network with a ResNet101 architecture to classify frames as dysplastic or non-dysplastic. The network was tested on three different scenarios: high-quality still images, all available video frames, and a selected sequence within each video. RESULTS: 57 different patients, each with videos of magnification areas of BE (34 dysplasia, 23 NDBE), were included. Performance was evaluated using a leave-one-patient-out cross-validation methodology. 60,174 (39,347 dysplasia, 20,827 NDBE) magnification video frames were used to train the network. The testing set included 49,726 i-scan-3/optical enhancement magnification frames. On 350 high-quality still images the network achieved a sensitivity of 94%, specificity of 86%, and area under the ROC curve (AUROC) of 96%. On all 49,726 available video frames the network achieved a sensitivity of 92%, specificity of 82%, and AUROC of 95%. On a selected sequence of frames per case (a total of 11,471 frames) we used an exponentially weighted moving average of classifications on consecutive frames to characterize dysplasia. The network achieved a sensitivity of 92%, specificity of 84%, and AUROC of 96%. The mean assessment speed per frame was 0.0135 seconds (SD ±0.006). CONCLUSION: Our network can characterize BE dysplasia with high accuracy and speed on high-quality magnification images and sequences of video frames, moving it towards real-time automated diagnosis.
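
    A minimal sketch of the exponentially weighted moving average used in the sequence-level test (the smoothing factor alpha is an assumption; the abstract does not state the study's value):

        # EWMA over consecutive frame classifications: recent frames weigh
        # more, and isolated noisy predictions are damped.
        def ewma_smooth(frame_scores, alpha=0.3):
            """Smooth per-frame dysplasia scores in a single streaming pass."""
            smoothed, current = [], frame_scores[0]
            for score in frame_scores:
                current = alpha * score + (1 - alpha) * current
                smoothed.append(current)
            return smoothed

        scores = [0.9, 0.1, 0.95, 0.9, 0.85]        # raw, unstable frame outputs
        print([round(s, 2) for s in ewma_smooth(scores)])
        # -> [0.9, 0.66, 0.75, 0.79, 0.81]: the outlier frame no longer flips the call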

    A new artificial intelligence system successfully detects and localises early neoplasia in Barrett's esophagus by using convolutional neural networks

    BACKGROUND AND AIMS: Seattle protocol biopsies for Barrett's esophagus (BE) surveillance are labour intensive with low compliance. Dysplasia detection rates vary, leading to missed lesions. This can potentially be offset with computer-aided detection. We have developed convolutional neural networks (CNNs) to identify areas of dysplasia and where to target biopsy. METHODS: 119 videos were collected in high-definition white light and optical chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic and non-dysplastic BE (NDBE). We trained an indirectly supervised CNN to classify images as dysplastic/non-dysplastic using whole-video annotations to minimise selection bias and maximise accuracy. The CNN was trained using 148,936 video frames (31 dysplastic patients, 31 NDBE, two normal esophagus), validated on 25,161 images from 11 patient videos, and tested on 264 i-scan-1 images from 28 dysplastic and 16 NDBE patients, which included expert delineations. To localise targeted biopsies/delineations, a second, directly supervised CNN was generated based on expert delineations of 94 dysplastic images from 30 patients. This was tested on 86 i-scan-1 images from 28 dysplastic patients. FINDINGS: The indirectly supervised CNN achieved a per-image sensitivity in the test set of 91%, specificity of 79%, and area under the receiver operating characteristic curve of 93% for detecting dysplasia. Per-lesion sensitivity was 100%. Mean assessment speed was 48 frames per second (fps). 97% of targeted biopsy predictions matched expert and histological assessment, at 56 fps. The artificial intelligence system performed better than six endoscopists. INTERPRETATION: Our CNNs classify and localise dysplastic Barrett's esophagus, potentially supporting endoscopists during surveillance.
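
    One plausible reading of training "using whole-video annotations", sketched below with PyTorch (this is an assumption about the setup, not the authors' code): every frame inherits its video-level dysplastic/non-dysplastic label, avoiding per-frame selection bias at the cost of some label noise.

        # Weak supervision sketch: propagate a video-level label to all frames.
        import torch
        from torch.utils.data import Dataset

        class WeaklyLabelledFrames(Dataset):
            def __init__(self, videos):
                # videos: list of (frames tensor of shape (T, 3, H, W), label 0/1)
                self.samples = [
                    (frame, label) for frames, label in videos for frame in frames
                ]

            def __len__(self):
                return len(self.samples)

            def __getitem__(self, idx):
                frame, label = self.samples[idx]
                return frame, torch.tensor(label, dtype=torch.float32)

        videos = [(torch.randn(10, 3, 64, 64), 1), (torch.randn(8, 3, 64, 64), 0)]
        print(len(WeaklyLabelledFrames(videos)))  # -> 18 frame-level samples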

    Computer-assisted cancer detection in gastrointestinal endoscopy using deep learning

    Gastrointestinal cancer, including colorectal and oesophageal cancer, accounts for over 35% of cancer-related deaths worldwide. It is possible to identify these diseases at an early stage during endoscopic examinations and to facilitate prompt treatment that enhances patient outcomes. For colorectal cancer, deep-learning approaches have shown increases in polyp detection rates during colonoscopies. However, most of these systems are trained using static images, whilst, in clinical practice, the procedure is conducted on a real-time video feed. Moreover, enhanced polyp detection rates may result in the identification of benign polyps, leading to unnecessary increases in time and cost. Consequently, there is a growing demand for accompanying tools that characterize polyps in order to determine which ones require resection. Recent optical diagnosis deep-learning approaches have shown promising results assisting with this task, yet polyp appearance during a procedure can vary, making automatic predictions unstable. Patients with Barrett's esophagus, a recognized precursor to adenocarcinoma in esophageal cancer, undergo gastroscopies to diagnose and treat early dysplasia. Studies indicate that up to 25% of early cases are missed during gastroscopies. Although deep-learning approaches have been investigated as decision-support tools, the complexity of the task hinders their translation to clinical practice: the lesions are often subtle, evading detection by the human eye, and the data presents logistical difficulties as it comprises lengthy videos with limited lesion variation. This thesis explores deep-learning methods to bridge the gap between the development and the clinical translation of computer-aided detection and diagnosis tools for early cancer detection in endoscopy. In colonoscopies, temporal information and features from videos are harnessed to solve low-stability challenges and improve performance in clinical scenarios. In gastroscopies, prior knowledge is utilised during data preparation and model training to reduce overfitting and obtain more generalisable solutions.

    Detección y Clasificación de Tejidos Anómalos en Mamografías Digitales Mediante Redes Neuronales Convolucionales.

    Breast cancer is a global health problem that accounts for more than 25% of new cancer cases in women, and early detection through mammography plays a fundamental role in addressing it. This work presents a novel system for the detection and classification of anomalies in mammographic images using convolutional neural networks (CNNs). It is an ambitious system that distinguishes between five classes of mammograms: without anomalies, with benign tumour masses, with malignant tumour masses, with benign microcalcifications, or with malignant microcalcifications. This work evaluates not only the accuracy of CNNs applied to this specific problem, but also the influence of other parameters, such as the inclusion of an image quality enhancement stage or the resolution and number of images used to train the network.
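
    A minimal sketch of such a five-class classifier using a standard torchvision backbone (the backbone choice and input size are assumptions; the paper's exact CNN is not specified in this abstract):

        # Five-class mammogram classifier sketch via a swapped-out final layer.
        import torch
        import torchvision.models as models

        CLASSES = ["normal", "benign_mass", "malignant_mass",
                   "benign_microcalcification", "malignant_microcalcification"]

        model = models.resnet18(weights=None)            # or pretrained weights
        model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))

        logits = model(torch.randn(1, 3, 224, 224))      # one mammogram patch
        print(CLASSES[logits.argmax(dim=1).item()])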

    Spatio-temporal Classification for Polyp Diagnosis - supplementary material.mp4

    Visual examples of the types of video clips used in the paper "Spatio-temporal Classification for Polyp Diagnosis". These are short videos of polyps to be classified as adenomas or non-adenomas.

    Polyp characterisation using deep learning and a publicly accessible polyp video database

    OBJECTIVE: Convolutional neural networks (CNN) for computer-aided diagnosis (CADx) of polyps are often trained using high-quality still images in a single chromoendoscopy imaging modality, with sessile serrated lesions (SSLs) often excluded. This study developed a CNN from videos to classify polyps as adenomatous or non-adenomatous using standard narrow-band imaging (NBI) and NBI-near focus (NBI-NF), and created a publicly accessible polyp video database. METHODS: We trained a CNN with 16,832 high- and moderate-quality frames from 229 polyp videos (56 SSLs). It was evaluated with 222 polyp videos (36 SSLs) across two test-sets. Test-set I consists of 14,320 frames (157 polyps, 111 diminutive). Test-set II, which is publicly accessible, consists of 3,317 video frames (65 polyps, 41 diminutive) and was benchmarked against three expert and three non-expert endoscopists. RESULTS: Sensitivity for adenoma characterisation was 91.6% in test-set I and 89.7% in test-set II. Specificity was 91.9% and 88.5%. Sensitivity for diminutive polyps was 89.9% and 87.5%; specificity 90.5% and 88.2%. In NBI-NF, sensitivity was 89.4% and 89.5%, with a specificity of 94.7% and 83.3%. In NBI, sensitivity was 85.3% and 91.7%, with a specificity of 87.5% and 90.0%, respectively. The CNN achieved the PIVI-1 and PIVI-2 thresholds for each test-set. In the benchmarking of test-set II, the CNN was significantly more accurate than non-experts (13.8% difference, 95% CI 3.2-23.6; p=0.01), with no significant difference from experts. CONCLUSIONS: A single CNN can differentiate adenomas from SSLs and hyperplastic polyps in both NBI and NBI-NF. A publicly accessible NBI polyp video database was created and benchmarked.
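
    For reference, the sensitivity and specificity quoted throughout can be computed per subgroup with a small generic helper (a sketch; the labels below are illustrative, with 1 = adenoma):

        # Sensitivity = adenoma recall; specificity = non-adenoma recall.
        def sensitivity_specificity(y_true, y_pred):
            tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
            tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
            fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
            fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
            return tp / (tp + fn), tn / (tn + fp)

        sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
        print(round(sens, 2), round(spec, 2))  # -> 0.67 0.5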